Health Record
Microsoft's Copilot Health can use AI to turn your fitness data and medical records 'into a coherent story'
The aim is to help users have the right context and questions to take to their doctor. Microsoft has unveiled Copilot Health, an AI-powered tool it claims can help make sense of your medical records, health history and fitness data from wearables, should you grant it access to that information. The company said it will live in a separate, secure space in the Copilot app, and that the idea is to give you more context and insights so you can ask your doctor the right questions when you see them. Copilot Health is designed to help you better understand your medical information as a whole, Microsoft says. It is not intended to diagnose, treat or prevent diseases or other conditions, and is not a substitute for professional medical advice, the company noted in a blog post.
- Health & Medicine > Health Care Technology > Medical Record (0.92)
- Health & Medicine > Consumer Health (0.90)
- Asia > China (0.05)
- North America > United States > Massachusetts (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Israel (0.04)
Dual-stage and Lightweight Patient Chart Summarization for Emergency Physicians
Wu, Jiajun, Zaidi, Swaleh, Teitge, Braden, Leung, Henry, Zhou, Jiayu, Holodinsky, Jessalyn, Drew, Steve
Electronic health records (EHRs) contain extensive unstructured clinical data that can overwhelm emergency physicians trying to identify critical information. We present a two-stage summarization system that runs entirely on embedded devices, enabling offline clinical summarization while preserving patient privacy. In our approach, a dual-device architecture first retrieves relevant patient record sections using the Jetson Nano-R (Retrieve), then generates a structured summary on another Jetson Nano-S (Summarize), communicating via a lightweight socket link. The summarization output is two-fold: (1) a fixed-format list of critical findings, and (2) a context-specific narrative focused on the clinician's query. The retrieval stage uses locally stored EHRs, splits long notes into semantically coherent sections, and searches for the most relevant sections per query. The generation stage uses a locally hosted small language model (SLM) to produce the summary from the retrieved text, operating within the constraints of two NVIDIA Jetson devices. We first benchmarked six open-source SLMs under 7B parameters to identify viable models. We incorporated an LLM-as-Judge evaluation mechanism to assess summary quality in terms of factual accuracy, completeness, and clarity. Preliminary results on MIMIC-IV and de-identified real EHRs demonstrate that our fully offline system can effectively produce useful summaries in under 30 seconds.
- North America > Canada > Alberta > Census Division No. 6 > Calgary Metropolitan Region > Calgary (0.14)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Europe (0.04)
- Health & Medicine > Health Care Technology > Medical Record (1.00)
- Health & Medicine > Diagnostic Medicine (0.88)
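The retrieval stage described in the abstract above can be pictured with a minimal sketch: split a long note into sections, score each against the clinician's query, and keep the top matches. The term-overlap scoring and all names here are illustrative assumptions; the paper's system uses semantically coherent splitting and runs on Jetson hardware.

```python
import re

def split_sections(note: str) -> list[str]:
    """Split a note on blank lines into candidate sections (toy splitter)."""
    return [s.strip() for s in note.split("\n\n") if s.strip()]

def score(section: str, query: str) -> float:
    """Fraction of query terms appearing in the section (bag-of-words)."""
    terms = set(re.findall(r"\w+", query.lower()))
    words = set(re.findall(r"\w+", section.lower()))
    return len(terms & words) / max(len(terms), 1)

def retrieve(note: str, query: str, k: int = 2) -> list[str]:
    """Return the k sections most relevant to the query."""
    sections = split_sections(note)
    return sorted(sections, key=lambda s: score(s, query), reverse=True)[:k]

note = (
    "HPI: 67M presents with fever and productive cough.\n\n"
    "Medications: ceftriaxone 1g IV daily, azithromycin 500mg PO.\n\n"
    "Imaging: chest x-ray shows right lower lobe consolidation."
)
top = retrieve(note, "current antibiotic medications", k=1)
```

The retrieved section would then be handed to the second device's language model for summarization.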
RL Is a Hammer and LLMs Are Nails: A Simple Reinforcement Learning Recipe for Strong Prompt Injection
Wen, Yuxin, Zharmagambetov, Arman, Evtimov, Ivan, Kokhlikyan, Narine, Goldstein, Tom, Chaudhuri, Kamalika, Guo, Chuan
Prompt injection poses a serious threat to the reliability and safety of LLM agents. Recent defenses against prompt injection, such as Instruction Hierarchy and SecAlign, have shown notable robustness against static attacks. However, to more thoroughly evaluate the robustness of these defenses, it is arguably necessary to employ strong attacks such as automated red-teaming. To this end, we introduce RL-Hammer, a simple recipe for training attacker models that automatically learn to perform strong prompt injections and jailbreaks via reinforcement learning. RL-Hammer requires no warm-up data and can be trained entirely from scratch. To achieve high attack success rates (ASRs) against industrial-level models with defenses, we propose a set of practical techniques that enable highly effective, universal attacks. Using this pipeline, RL-Hammer reaches a 98% ASR against GPT-4o and a 72% ASR against GPT-5 with the Instruction Hierarchy defense. We further discuss the challenge of achieving high diversity in attacks, highlighting how attacker models tend to "reward-hack" diversity objectives. Finally, we show that RL-Hammer can evade multiple prompt injection detectors. We hope our work advances automatic red-teaming and motivates the development of stronger, more principled defenses. More recently, a new paradigm has emerged that allows LLMs to behave as autonomous agents in complex environments, including full-fledged operating systems, integrated software platforms, and multi-step tool pipelines. In these contexts, LLMs can function as coding assistants, system administrators, and even academic researchers. Notable examples include Microsoft Copilot (GitHub, 2025), Anthropic Claude Computer Use (Anthropic, 2024), OpenAI Operator (OpenAI, 2025), and Zochi (Intology, 2025), each demonstrating the potential to combine sophisticated reasoning with direct system control.
As these capabilities continue to advance, LLM agents are expected to be integrated into an even broader range of systems, becoming indispensable in both consumer and enterprise applications. However, these capabilities also introduce significant security risks, most notably prompt injection.
PEHRT: A Common Pipeline for Harmonizing Electronic Health Record data for Translational Research
Gronsbell, Jessica, Panickan, Vidul Ayakulangara, Lin, Chris, Charlon, Thomas, Hong, Chuan, Zhou, Doudou, Wang, Linshanshan, Gao, Jianhui, Zhou, Shirley, Tian, Yuan, Shi, Yaqi, Gan, Ziming, Cai, Tianxi
Integrative analysis of multi-institutional Electronic Health Record (EHR) data enhances the reliability and generalizability of translational research by leveraging larger, more diverse patient cohorts and incorporating multiple data modalities. However, harmonizing EHR data across institutions poses major challenges due to data heterogeneity, semantic differences, and privacy concerns. To address these challenges, we introduce $\textit{PEHRT}$, a standardized pipeline for efficient EHR data harmonization consisting of two core modules: (1) data pre-processing and (2) representation learning. PEHRT maps EHR data to standard coding systems and uses advanced machine learning to generate research-ready datasets without requiring individual-level data sharing. Our pipeline is also data model agnostic and designed for streamlined execution across institutions based on our extensive real-world experience. We provide a complete suite of open source software, accompanied by a user-friendly tutorial, and demonstrate the utility of PEHRT in a variety of tasks using data from diverse healthcare systems.
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Netherlands (0.04)
- Asia > Middle East > Israel (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Health Care Technology > Medical Record (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Multiple Sclerosis (0.93)
- Information Technology > Software (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Biomedical Informatics (1.00)
- (5 more...)
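The pre-processing module described in the PEHRT abstract above maps site-specific EHR codes onto standard coding systems so cohorts from different institutions become comparable. A minimal sketch of that idea, with toy site labels and mappings that are assumptions for illustration rather than PEHRT's actual tables:

```python
# Toy mapping from (site, local code) pairs to a shared standard
# vocabulary (ICD-10 here). Real harmonization pipelines use curated
# crosswalks and ontologies; these entries are illustrative only.
STANDARD_MAP = {
    ("siteA", "DM2"): "E11",     # site A's local label for type 2 diabetes
    ("siteB", "250.00"): "E11",  # site B records the ICD-9-CM code
    ("siteA", "HTN"): "I10",
    ("siteB", "401.9"): "I10",
}

def harmonize(records):
    """Translate (site, local_code) pairs to standard codes, dropping unmapped ones."""
    out = []
    for site, code in records:
        std = STANDARD_MAP.get((site, code))
        if std is not None:
            out.append(std)
    return out

codes = harmonize([("siteA", "DM2"), ("siteB", "250.00"), ("siteB", "???")])
```

After this step, downstream representation learning can operate on a shared code space without any individual-level data leaving an institution.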
Latent Factor Point Processes for Patient Representation in Electronic Health Records
Knight, Parker, Zhou, Doudou, Xia, Zongqi, Cai, Tianxi, Lu, Junwei
Electronic health records (EHR) contain valuable longitudinal patient-level information, yet most statistical methods reduce the irregular timing of EHR codes into simple counts, thereby discarding rich temporal structure. Existing temporal models often impose restrictive parametric assumptions or are tailored to code level rather than patient-level tasks. We propose the latent factor point process model, which represents code occurrences as a high-dimensional point process whose conditional intensity is driven by a low dimensional latent Poisson process. This low-rank structure reflects the clinical reality that thousands of codes are governed by a small number of underlying disease processes, while enabling statistically efficient estimation in high dimensions. Building on this model, we introduce the Fourier-Eigen embedding, a patient representation constructed from the spectral density matrix of the observed process. We establish theoretical guarantees showing that these embeddings efficiently capture subgroup-specific temporal patterns for downstream classification and clustering. Simulations and an application to an Alzheimer's disease EHR cohort demonstrate the practical advantages of our approach in uncovering clinically meaningful heterogeneity.
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.93)
- Health & Medicine > Therapeutic Area > Neurology > Alzheimer's Disease (1.00)
- Health & Medicine > Health Care Technology > Medical Record (1.00)
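The embedding idea in the abstract above builds patient features from the frequency content of code-occurrence processes. A toy sketch of that spirit: bin each code's event times, take the magnitudes of a few DFT frequencies, and concatenate across codes. This simplification is an assumption for illustration, not the paper's Fourier-Eigen construction, which works with the spectral density matrix of the multivariate process.

```python
import math

def dft_magnitudes(x, n_freq=3):
    """Magnitudes of the first n_freq nonzero DFT frequencies of a count series."""
    n = len(x)
    mags = []
    for k in range(1, n_freq + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def embed(event_times_by_code, horizon=12):
    """Concatenate per-code spectral features into one patient vector."""
    feats = []
    for times in event_times_by_code.values():
        counts = [0] * horizon
        for t in times:
            if 0 <= t < horizon:
                counts[t] += 1
        feats.extend(dft_magnitudes(counts))
    return feats

# Hypothetical patient: regular E11 (diabetes) visits, sporadic I10 codes.
patient = {"E11": [0, 3, 6, 9], "I10": [1, 2]}
vec = embed(patient)
```

Codes with regular visit rhythms and codes with bursty ones produce different spectral signatures, which is what makes such features useful for clustering patients.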
Evaluating Retrieval-Augmented Generation vs. Long-Context Input for Clinical Reasoning over EHRs
Myers, Skatje, Dligach, Dmitriy, Miller, Timothy A., Barr, Samantha, Gao, Yanjun, Churpek, Matthew, Mayampurath, Anoop, Afshar, Majid
Electronic health records (EHRs) are long, noisy, and often redundant, posing a major challenge for the clinicians who must navigate them. Large language models (LLMs) offer a promising solution for extracting and reasoning over this unstructured text, but the length of clinical notes often exceeds even state-of-the-art models' extended context windows. Retrieval-augmented generation (RAG) offers an alternative by retrieving task-relevant passages from across the entire EHR, potentially reducing the amount of required input tokens. In this work, we propose three clinical tasks designed to be replicable across health systems with minimal effort: 1) extracting imaging procedures, 2) generating timelines of antibiotic use, and 3) identifying key diagnoses. Using EHRs from actual hospitalized patients, we test three state-of-the-art LLMs with varying amounts of provided context, using either targeted text retrieval or the most recent clinical notes. We find that RAG closely matches or exceeds the performance of using recent notes, and approaches the performance of using the models' full context while requiring drastically fewer input tokens. Our results suggest that RAG remains a competitive and efficient approach even as newer models become capable of handling increasingly longer amounts of text.
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > Colorado (0.04)
- (3 more...)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Asia > Middle East > Israel (0.04)
- (2 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.46)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Health Care Technology > Medical Record (1.00)
- Health & Medicine > Consumer Health (1.00)
- (3 more...)
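The recency baseline that the abstract above compares RAG against can be sketched as packing the most recent notes into a fixed token budget. Tokens are approximated here by whitespace words; the paper's tokenization and budgets differ, so treat this as illustrative only.

```python
def recent_notes_context(notes, budget):
    """Pack the newest notes into a context until the word budget is hit.

    notes: list of (timestamp, text) pairs; returns texts in chronological order.
    """
    chosen = []
    used = 0
    for ts, text in sorted(notes, key=lambda n: n[0], reverse=True):
        cost = len(text.split())  # crude token proxy
        if used + cost > budget:
            break
        chosen.append(text)
        used += cost
    return list(reversed(chosen))  # restore chronological order

# Hypothetical stay: three notes of ~20 "tokens" each, budget for two.
notes = [(1, "day one admission note " * 5),
         (2, "day two progress note " * 5),
         (3, "day three discharge summary " * 5)]
ctx = recent_notes_context(notes, budget=45)
```

The paper's finding is that replacing this recency heuristic with targeted retrieval matches or beats it at a fraction of the input tokens.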
MedRep: Medical Concept Representation for General Electronic Health Record Foundation Models
Kim, Junmo, Lee, Namkyeong, Kim, Jiwon, Kim, Kwangsoo
Electronic health record (EHR) foundation models have been an area ripe for exploration with their improved performance in various medical tasks. Despite the rapid advances, there exists a fundamental limitation: Processing unseen medical codes out of vocabulary. This problem limits the generalizability of EHR foundation models and the integration of models trained with different vocabularies. To alleviate this problem, we propose a set of novel medical concept representations (MedRep) for EHR foundation models based on the observational medical outcome partnership (OMOP) common data model (CDM). For concept representation learning, we enrich the information of each concept with a minimal definition through large language model (LLM) prompts and complement the text-based representations through the graph ontology of OMOP vocabulary. Our approach outperforms the vanilla EHR foundation model and the model with a previously introduced medical code tokenizer in diverse prediction tasks. We also demonstrate the generalizability of MedRep through external validation.
- Asia > South Korea > Seoul > Seoul (0.04)
- North America > United States > Nebraska (0.04)
- North America > United States > Maryland > Montgomery County > Rockville (0.04)
- Asia > China (0.04)
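The MedRep idea above pairs a text-derived vector for each concept with smoothing over the vocabulary's ontology graph, so related concepts land near each other even when a code was unseen in training. A toy sketch under loud assumptions: the character-sum "embedding" and the tiny ontology below are stand-ins for MedRep's LLM-generated definitions and the OMOP vocabulary graph.

```python
def text_vec(definition, dim=8):
    """Deterministic bag-of-words hashing vector (illustrative stand-in
    for a real text embedding)."""
    v = [0.0] * dim
    for w in definition.lower().split():
        v[sum(ord(c) for c in w) % dim] += 1.0
    return v

def smooth(concept, defs, neighbors, alpha=0.5):
    """Mix a concept's own text vector with the mean of its ontology neighbors'."""
    own = text_vec(defs[concept])
    nbr_vecs = [text_vec(defs[n]) for n in neighbors.get(concept, [])]
    if not nbr_vecs:
        return own
    mean = [sum(col) / len(nbr_vecs) for col in zip(*nbr_vecs)]
    return [alpha * o + (1 - alpha) * m for o, m in zip(own, mean)]

# Hypothetical mini-vocabulary with one ontology edge (E11 <-> E10).
defs = {
    "E11": "type 2 diabetes mellitus a chronic metabolic disorder",
    "E10": "type 1 diabetes mellitus an autoimmune metabolic disorder",
    "I10": "essential primary hypertension elevated blood pressure",
}
neighbors = {"E11": ["E10"]}
rep = smooth("E11", defs, neighbors)
```

Because the representation is built from the concept's definition and its graph neighborhood rather than a fixed training vocabulary, a previously unseen code still gets a usable vector.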
People who live to 100 all share a 'superhuman' ability, scientists discover... could YOU be one of them?
People who live to 100 appear to have a 'superhuman' ability to avoid major illnesses, according to new research. Two large studies of older adults in Sweden have found that centenarians tend to develop fewer diseases, accumulate them more slowly, and in many cases avoid the most deadly age-related conditions altogether, despite living far longer than their peers. The work, published by an international research team, suggests that exceptional longevity is linked to a distinct pattern of ageing in which illness is delayed or even avoided entirely. The findings challenge the widely held belief that a longer life inevitably comes with more years of poor health. Researchers analysed decades of health records to compare people who reached 100 with those who died earlier but were born in the same years.